
    Effect of gate voltage on spin transport along α-helical protein

    Recently, chiral-induced spin selectivity in molecular systems has attracted extensive interest across the scientific community. Here, we investigate the effect of a gate voltage on spin-selective electron transport through an α-helical peptide/protein molecule contacted by two nonmagnetic electrodes. Based on an effective model Hamiltonian and the Landauer-Büttiker formula, we calculate the conductance and the spin polarization under an external electric field perpendicular to the helix axis of the α-helical peptide/protein molecule. Our results indicate that both the magnitude and the direction of the gate field strongly affect the conductance and the spin polarization. The spin filtration efficiency can be improved by properly tuning the gate voltage, especially in the strong-dephasing regime. Without the gate voltage, the spin polarization increases monotonically with molecular length, consistent with a recent experiment; in the presence of the gate voltage it exhibits oscillating behavior. In addition, under the gate voltage the spin selectivity is robust against dephasing, on-site energy disorder, and space-angle disorder. Our results could motivate further experimental and theoretical work on chirality-based spin selectivity in molecular systems. Comment: 8 pages, 7 figures
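The quantities the abstract mentions can be illustrated with a minimal sketch: in a Landauer-Büttiker picture, the two-terminal conductance follows from the spin-resolved transmissions, and the spin polarization is their normalized difference. The functions and the transmission values below are hypothetical placeholders, not the paper's model.

```python
# Minimal sketch, assuming spin-resolved transmissions T_up and T_down
# at the Fermi energy are already known (e.g., from a model Hamiltonian).
# This is NOT the paper's Hamiltonian; only the bookkeeping is shown.

def conductance(t_up, t_down):
    """Two-terminal conductance in units of e^2/h (Landauer-Buttiker)."""
    return t_up + t_down

def spin_polarization(t_up, t_down):
    """P = (T_up - T_down) / (T_up + T_down), bounded in [-1, 1]."""
    total = t_up + t_down
    return 0.0 if total == 0 else (t_up - t_down) / total

# Hypothetical transmissions for one gate-voltage setting:
t_up, t_down = 0.8, 0.2
G = conductance(t_up, t_down)        # 1.0 (in units of e^2/h)
P = spin_polarization(t_up, t_down)  # ~0.6
```

Tuning the gate voltage in the paper amounts to changing how `t_up` and `t_down` depend on energy; the polarization measure itself stays the same.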

    Factors that influence online learners' intent to continue in an online graduate program

    The primary purpose of this study was to determine the factors that influence online learners' intent to continue. Data were gathered from the University of Arkansas, Fayetteville, and Nicholls State University, with a total of n = 122 participants. The findings revealed positive relationships between intent to continue and online learners' perceived usefulness (r = .37, p < 0.01), perceived ease of use (r = .44, p < 0.01), perceived flexibility (r = .72, p < 0.01), perceived learner-instructor interaction (r = .52, p < 0.01), and satisfaction (r = .84, p < 0.01). The findings also showed a negative correlation between online learners' perceived learner-learner interaction and intent to continue (r = -.27, p < 0.01); however, because the learner-learner interaction questionnaire used negatively worded items, this still indicates a positive relationship between perceived learner-learner interaction and intent to continue. The Multiple Regression Analysis (MRA) revealed that perceived flexibility and satisfaction had a positive influence on online learners' intent to continue, and the R² value further revealed that these two predictor variables explained 76.4% of the variance in online learners' intent to continue.
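The Pearson correlations reported above can be sketched with a small stdlib-only function. The data below are made-up toy Likert responses, not the study's n = 122 sample, and the variable names are illustrative only.

```python
# Illustrative sketch of the Pearson correlation used in the study's
# bivariate analyses. Toy data only; not the actual survey responses.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

satisfaction = [3, 4, 5, 2, 4, 5]   # hypothetical Likert responses
intent       = [3, 4, 5, 2, 5, 5]   # hypothetical Likert responses
r = pearson_r(satisfaction, intent)  # positive, as in the study
```

In practice one would use a statistics package (which also supplies the p-value); the formula here just shows what the reported r values measure.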

    Jointly Modeling Embedding and Translation to Bridge Video and Language

    Automatically describing video content with natural language is a fundamental challenge of multimedia. Recurrent Neural Networks (RNNs), which model sequence dynamics, have attracted increasing attention for visual interpretation. However, most existing approaches generate a word locally from the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but semantically untrue (e.g., in their subjects, verbs, or objects). This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which simultaneously explores the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter creates a visual-semantic embedding space that enforces the relationship between the semantics of the entire sentence and the visual content. Our proposed LSTM-E consists of three components: a 2-D and/or 3-D deep convolutional neural network for learning a powerful video representation, a deep RNN for generating sentences, and a joint embedding model for exploring the relationships between visual content and sentence semantics. Experiments on the YouTube2Text dataset show that LSTM-E achieves the best reported performance to date in generating natural sentences: 45.3% and 31.0% in terms of BLEU@4 and METEOR, respectively. We also demonstrate that LSTM-E is superior to several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets.
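The joint-embedding idea described above can be sketched in a few lines: video and sentence features are projected into a shared space, and a relevance loss penalizes their distance there (to be minimized jointly with the LSTM's word-generation loss). The projection matrices and feature vectors below are tiny hypothetical examples, not the authors' trained model.

```python
# Conceptual sketch of the visual-semantic relevance term in LSTM-E.
# Tv and Ts stand in for the learned video/sentence projections; all
# values here are toy placeholders.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def relevance_loss(Tv, video_feat, Ts, sent_feat):
    """Squared Euclidean distance between the embedded video and sentence."""
    ev = matvec(Tv, video_feat)
    es = matvec(Ts, sent_feat)
    return sum((a - b) ** 2 for a, b in zip(ev, es))

Tv = [[1.0, 0.0], [0.0, 1.0]]   # toy video-to-embedding projection
Ts = [[1.0, 0.0], [0.0, 1.0]]   # toy sentence-to-embedding projection
loss = relevance_loss(Tv, [0.5, 0.5], Ts, [0.4, 0.6])  # small but nonzero
```

Training would then minimize a weighted sum of this relevance loss and the LSTM's negative log-likelihood, which is what ties sentence-level semantics to the visual content.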